Dimensionally consistent learning with Buckingham Pi
Authors
Abstract
In the absence of governing equations, dimensional analysis is a robust technique for extracting insights and finding symmetries in physical systems. Given measurement variables and parameters, the Buckingham Pi theorem provides a procedure for finding a set of dimensionless groups that spans the solution space, although this set is not unique. We propose an automated approach that uses the symmetric and self-similar structure of available measurement data to discover the dimensionless groups that best collapse these data to a lower-dimensional space according to an optimal fit. We develop three data-driven techniques that use the Buckingham Pi theorem as a constraint: (1) a constrained optimization problem with a non-parametric input–output fitting function, (2) a deep learning algorithm (BuckiNet) that projects the input parameter space to a lower dimension in the first layer, and (3) a technique based on sparse identification of nonlinear dynamics to discover dimensionless equations whose coefficients parameterize the dynamics. We explore the accuracy, robustness, and computational complexity of these methods and show that they successfully identify dimensionless groups in three example problems: a bead on a rotating hoop, a laminar boundary layer, and Rayleigh–Bénard convection. In short, three machine learning techniques are developed for discovering physically meaningful scaling parameters from data, with the Buckingham Pi theorem as a constraint.
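The constraint shared by all three techniques is the Buckingham Pi theorem itself: a power-law combination of the measured variables is dimensionless exactly when its exponent vector lies in the null space of the dimension matrix. The sketch below illustrates that constraint for a bead on a rotating hoop, one of the paper's example problems; the reduced variable set (g, R, omega), the SymPy-based null-space computation, and all names are illustrative assumptions rather than the authors' implementation.

# Minimal sketch of the Buckingham Pi constraint underlying the three methods:
# a monomial prod_i v_i**q_i is dimensionless iff D q = 0, where D is the
# dimension matrix of the variables. The variable choices here are assumptions.
import sympy as sp

# Example variables for a bead on a rotating hoop (an assumed, reduced set):
#   g      gravitational acceleration  [L T^-2]
#   R      hoop radius                 [L]
#   omega  hoop rotation rate          [T^-1]
variables = ["g", "R", "omega"]

# Rows are exponents of the base dimensions M, L, T; columns follow `variables`.
D = sp.Matrix([
    [0, 0, 0],    # M
    [1, 1, 0],    # L
    [-2, 0, -1],  # T
])

# Every null-space vector q of D defines a candidate dimensionless Pi group.
for q in D.nullspace():
    group = " * ".join(f"{v}**({e})" for v, e in zip(variables, list(q)))
    print("candidate Pi group:", group)
# Prints g**(-1/2) * R**(1/2) * omega**(1), i.e. omega*sqrt(R/g); any power of
# it (such as the familiar omega**2 * R / g) spans the same one-dimensional
# null space, which is why Pi groups are not unique and a data-driven fit
# criterion is needed to select the combination that best collapses the data.

In the paper's framing, this null-space condition is what each of the three methods enforces, while an optimal-fit criterion selects which combination of null-space directions best collapses the measurements.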
Similar resources
Consistent and Coherent Learning with δ-delay
A consistent learner is required to correctly and completely reflect in its actual hypothesis all data received so far. Though this demand sounds quite plausible, it may lead to the unsolvability of the learning problem. Therefore, in the present paper several variations of consistent learning are introduced and studied. These variations allow a so-called δ-delay relaxing the consistency deman...
Consistent Multitask Learning with Nonlinear Output Relations
Key to multitask learning is exploiting the relationships between different tasks in order to improve prediction performance. Most previous methods have focused on the case where task relations can be modeled as linear operators and regularization approaches can be used successfully. However, in practice assuming the tasks to be linearly related is often restrictive, and allowing for nonlinear...
Recency, Consistent Learning, and Nash Equilibrium Learning with Recency Bias
We examine the long-run implication of two models of learning with recency bias: recursive weights and limited memory. We show that both models generate similar beliefs, and that both have a weighted universal consistency property. Using the limited memory model we are able to produce learning procedures that are both weighted universally consistent and converge with probability one to strict N...
The Soil Physics Contributions of Edgar Buckingham
During 1902 to 1906 as a soil physicist at the USDA Bureau of Soils (BOS), Edgar Buckingham originated the concepts of matric potential, soil–water retention curves, specific wate... ...years as a graduate assistant in the physics department. He did additional graduate work at the University of Strasbourg and the University of Leipzig, where he studied under chemist Wilhelm Ostwald, who won the Nobel...
Consistent and Conservative Iterative Learning
The present study aims at insights into the nature of incremental learning in the context of Gold’s model of identification in the limit. With a focus on natural requirements such as consistency and conservativeness, incremental learning is analysed both for learning from positive examples and for learning from positive and negative examples. The results obtained illustrate in which way differe...
Journal
Journal title: Nature Computational Science
Year: 2022
ISSN: 2662-8457
DOI: https://doi.org/10.1038/s43588-022-00355-5